Semi-supervised learning based methods are the current state-of-the-art (SOTA) solutions to the noisy-label learning problem. They rely on first learning an unsupervised label cleaner to divide the training samples into a labeled set of clean data and an unlabeled set of noisy data. Typically, the cleaner is obtained by fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is \emph{class agnostic} and assumes the loss distributions of clean and noisy samples are the same across different classes. Unfortunately, in practice, this assumption does not always hold due to the varying learning difficulty of different classes, leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named \textbf{C}lass \textbf{P}rototype-based label noise \textbf{C}leaner (\textbf{CPC}). Unlike previous works that treat all classes equally, CPC fully accounts for loss distribution heterogeneity and applies class-aware modulation to partition the clean and noisy data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously, and thus can better distinguish clean from noisy labels. We theoretically justify the effectiveness of our method by explaining it within the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings performance improvements across all benchmarks. Code and pre-trained models will be released at \url{https://github.com/hjjpku/CPC.git}.
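To make the class-aware partition concrete, below is a minimal sketch (not the authors' CPC implementation) of fitting a separate two-component Gaussian mixture to the per-sample losses of each class, instead of a single class-agnostic mixture; all names and thresholds are illustrative assumptions.

```python
# Minimal sketch of class-aware loss-distribution modeling for clean/noisy
# partitioning. NOT the authors' CPC implementation; it only illustrates
# fitting one two-component mixture per class instead of a class-agnostic one.
import numpy as np
from sklearn.mixture import GaussianMixture

def class_aware_clean_probs(losses, noisy_labels, num_classes):
    """Return per-sample probability of being clean, modeled per class."""
    clean_probs = np.zeros_like(losses, dtype=np.float64)
    for c in range(num_classes):
        idx = np.where(noisy_labels == c)[0]
        if len(idx) < 2:                 # too few samples to fit a mixture
            clean_probs[idx] = 1.0
            continue
        x = losses[idx].reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, reg_covar=1e-4).fit(x)
        # The component with the smaller mean loss is treated as "clean".
        clean_comp = int(np.argmin(gmm.means_.ravel()))
        clean_probs[idx] = gmm.predict_proba(x)[:, clean_comp]
    return clean_probs

# Usage: losses from a warm-up model, labels possibly corrupted.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.5, 0.2, 500), rng.normal(2.5, 0.5, 100)])
labels = rng.integers(0, 10, size=len(losses))
p_clean = class_aware_clean_probs(losses, labels, num_classes=10)
clean_set = p_clean > 0.5   # splits labeled (clean) vs unlabeled (noisy) sets
```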
Optical flow estimation is a classical yet challenging task in computer vision. One of the essential factors in accurately predicting optical flow is alleviating occlusions between frames. However, occlusion remains a thorny problem for current top-performing optical flow estimation methods, since the local evidence available for modeling occluded areas is insufficient. In this paper, we propose the Super Kernel Flow Network (SKFlow), a CNN architecture that ameliorates the impact of occlusions on optical flow estimation. SKFlow benefits from super kernels, which bring enlarged receptive fields that complement the absent matching information and recover occluded motions. We present efficient super kernel designs utilizing conical connections and hybrid depth-wise convolutions. Extensive experiments demonstrate the effectiveness of SKFlow on multiple benchmarks, especially in occluded areas. Without backbones pre-trained on ImageNet and with only a modest increase in computation, SKFlow achieves compelling performance and ranks $\textbf{1st}$ among currently published methods on the Sintel benchmark. On the challenging Sintel clean and final passes (test), SKFlow surpasses the best published results in the unmatched areas ($7.96$ and $12.50$) by $9.09\%$ and $7.92\%$. The code is available at \href{https://github.com/littlespray/SKFlow}{https://github.com/littlespray/SKFlow}.
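As a rough illustration of the enlarged-receptive-field idea, the following is a hedged PyTorch sketch of a large-kernel depth-wise block; it is not SKFlow's exact conical-connection design, and the kernel sizes and channel counts are assumptions.

```python
# A minimal "super kernel"-style block: a large depth-wise convolution widens
# the receptive field cheaply, a small depth-wise path keeps local detail
# ("hybrid"), and a point-wise convolution mixes channels. Illustrative only.
import torch
import torch.nn as nn

class SuperKernelBlock(nn.Module):
    def __init__(self, channels: int, large_kernel: int = 31):
        super().__init__()
        # Depth-wise large kernel: receptive field grows without the
        # quadratic parameter cost of a dense large-kernel convolution.
        self.dw_large = nn.Conv2d(channels, channels, large_kernel,
                                  padding=large_kernel // 2, groups=channels)
        self.dw_small = nn.Conv2d(channels, channels, 3, padding=1,
                                  groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)   # point-wise mixing
        self.act = nn.GELU()

    def forward(self, x):
        y = self.dw_large(x) + self.dw_small(x)      # hybrid depth-wise paths
        return x + self.act(self.pw(y))              # residual connection

feat = torch.randn(1, 128, 46, 62)   # e.g., 1/8-scale flow features
out = SuperKernelBlock(128)(feat)    # same spatial shape as the input
```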
Group equivariance (e.g., SE(3) equivariance) is a critical physical symmetry in science, from classical and quantum physics to computational biology. It enables robust and accurate predictions under arbitrary reference transformations. In light of this, great effort has been devoted to encoding this symmetry into deep neural networks, which has been shown to improve the generalization performance and data efficiency of downstream tasks. Constructing equivariant neural networks, however, typically incurs high computational costs to ensure expressiveness. Therefore, how to better trade off expressiveness against computational efficiency plays a core role in the design of equivariant deep learning models. In this paper, we propose a framework for constructing SE(3)-equivariant graph neural networks that can approximate geometric quantities efficiently. Inspired by differential geometry and physics, we introduce local complete frames to graph neural networks, so that tensor information at any given order can be projected onto the frames. The local frames are constructed to form an orthonormal basis, which avoids direction degeneration and ensures completeness. Since the frames are built solely from cross-product operations, our method is computationally efficient. We evaluate our method on two tasks: Newtonian mechanics modeling and equilibrium molecule conformation generation. Extensive experimental results show that our model achieves the best or competitive performance on both types of datasets.
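The cross-product frame construction described above can be sketched in a few lines of PyTorch; the epsilon handling and the exact choice of the two input directions are assumptions, not necessarily the paper's precise formulation.

```python
# Minimal sketch of an orthonormal local frame built only from cross products,
# and of projecting an equivariant vector onto it to obtain invariant scalars.
import torch

def local_frame(x_i, x_j):
    """Build an orthonormal frame from two node positions via cross products."""
    a = x_i - x_j                        # relative direction
    b = torch.cross(x_i, x_j, dim=-1)    # second independent direction
    c = torch.cross(a, b, dim=-1)        # completes a right-handed basis
    eps = 1e-8                           # guards degenerate (parallel) inputs
    a = a / (a.norm(dim=-1, keepdim=True) + eps)
    b = b / (b.norm(dim=-1, keepdim=True) + eps)
    c = c / (c.norm(dim=-1, keepdim=True) + eps)
    return torch.stack([a, b, c], dim=-1)    # (..., 3, 3)

def scalarize(v, frame):
    """Project an equivariant vector onto the frame -> invariant scalars."""
    return torch.einsum('...i,...ij->...j', v, frame)

x_i, x_j = torch.randn(5, 3), torch.randn(5, 3)
F = local_frame(x_i, x_j)
s = scalarize(torch.randn(5, 3), F)   # rotation-invariant coordinates
```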
Person re-identification (Re-ID) in scenarios with large spatial and temporal spans has not been fully explored. This is partly because existing benchmark datasets were mainly collected within limited spatial and temporal ranges, e.g., using videos recorded by cameras in a specific region of a campus. Such limited spatial and temporal ranges make it difficult to simulate the difficulties of person Re-ID in real scenarios. In this work, we contribute a novel Large-scale Spatio-Temporal (LaST) person Re-ID dataset, including 10,862 identities with more than 228k images. Compared with existing datasets, LaST presents more challenging and highly diverse Re-ID settings, as well as significantly larger spatial and temporal ranges. For instance, each person can appear in different cities or countries, in various time slots from daytime to night, and in different seasons from spring to winter. To the best of our knowledge, LaST is a novel person Re-ID dataset with the largest spatio-temporal ranges. Based on LaST, we verify its challenge by conducting a comprehensive performance evaluation of 14 Re-ID algorithms. We further propose an easy-to-implement baseline that works well in such a challenging Re-ID setting. We also verify that models pre-trained on LaST can generalize well to existing datasets with short-term and cloth-changing scenarios. We expect LaST to inspire future work toward more realistic and challenging Re-ID tasks. More information about the dataset is available at https://github.com/shuxjweb/last.git.
Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) on several vision tasks thanks to their global modeling capability. However, ViTs lack the inductive biases inherent to convolution, and thus require large amounts of training data. As a result, ViTs do not perform as well as CNNs on small datasets, such as those in medicine and science. We experimentally found that masked autoencoders (MAE) can make the transformer focus more on the image itself, thus alleviating the data-hungry issue of ViTs to some extent. However, the current MAE model is too complex, resulting in over-fitting on small datasets, so a gap remains between MAEs trained on small datasets and advanced CNN models. We therefore investigated how to reduce the decoder complexity in MAE and found a more suitable architectural configuration for small datasets. In addition, we designed a location prediction task and a contrastive learning task to introduce localization and invariance characteristics into MAE. Our contrastive learning task not only enables the model to learn high-level visual information but also allows MAE's class token to be trained, something that most MAE improvement efforts do not consider. Extensive experiments show that, compared to the currently popular masked image modeling (MIM) methods and vision transformers for small datasets, our method achieves state-of-the-art performance on standard small datasets as well as on medical datasets with few samples. The code and models are available at https://github.com/Talented-Q/SDMAE.
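As an illustration of how a contrastive task can train MAE's class token, here is a minimal InfoNCE-style sketch over class tokens from two augmented views; the temperature and the absence of a projection head are assumptions, not the paper's exact configuration.

```python
# Minimal sketch: symmetric InfoNCE loss on MAE class tokens from two views.
# Matching views are positives; other samples in the batch are negatives.
import torch
import torch.nn.functional as F

def cls_token_contrastive_loss(cls_a, cls_b, temperature=0.2):
    """cls_a, cls_b: (B, D) class tokens from two views of the same images."""
    za = F.normalize(cls_a, dim=-1)
    zb = F.normalize(cls_b, dim=-1)
    logits = za @ zb.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cls_token_contrastive_loss(torch.randn(8, 192), torch.randn(8, 192))
```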
Co-speech gesture is crucial for human-machine interaction and digital entertainment. While previous works mostly map speech audio to human skeletons (e.g., 2D keypoints), directly generating speakers' gestures in the image domain remains unsolved. In this work, we formally define and study this challenging problem of audio-driven co-speech gesture video generation, i.e., using a unified framework to generate speaker image sequences driven by speech audio. Our key insight is that co-speech gestures can be decomposed into common motion patterns and subtle rhythmic dynamics. To this end, we propose a novel framework, Audio-driveN Gesture vIdeo gEneration (ANGIE), to effectively capture reusable co-speech gesture patterns as well as fine-grained rhythmic movements. To achieve high-fidelity image sequence generation, we leverage an unsupervised motion representation instead of a structural human body prior (e.g., 2D skeletons). Specifically, 1) we propose a vector quantized motion extractor (VQ-Motion Extractor) that summarizes common co-speech gesture patterns from implicit motion representations into codebooks; 2) moreover, a co-speech gesture GPT with motion refinement (Co-Speech GPT) is devised to complement the subtle prosodic motion details. Extensive experiments demonstrate that our framework renders realistic and vivid co-speech gesture videos. A demo video and more resources can be found at: https://alvinliu0.github.io/projects/ANGIE
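To illustrate the quantization step behind a VQ-Motion-Extractor-style module, here is a minimal vector-quantization sketch following standard VQ-VAE practice (straight-through estimator included); the codebook size and feature dimensions are assumptions rather than ANGIE's exact design.

```python
# Minimal sketch: continuous motion features are snapped to the nearest
# codebook entry, yielding discrete gesture "pattern" tokens.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                                # z: (B, T, D)
        d = torch.cdist(z, self.codebook.weight[None])   # (B, T, num_codes)
        idx = d.argmin(dim=-1)                           # nearest code per step
        z_q = self.codebook(idx)                         # quantized tokens
        # Straight-through estimator keeps gradients flowing to the encoder.
        return z + (z_q - z).detach(), idx

vq = VectorQuantizer()
tokens, codes = vq(torch.randn(2, 32, 256))   # discrete codes could feed a GPT
```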
Facial expression recognition (FER) plays a significant role in ubiquitous computer vision applications. We revisit this problem from a new perspective, asking whether the image generation process can yield useful representations that improve FER performance, and propose a novel generative method based on an image inversion mechanism for the FER task, termed Inversion FER (IFER). In particular, we devise a novel Adversarial Style Inversion Transformer (ASIT) for IFER to comprehensively extract features of generated facial images. In addition, ASIT is equipped with an image inversion discriminator that measures the cosine similarity of semantic features between source and generated images, constrained by a distribution alignment loss. Finally, we introduce a feature modulation module that fuses the structural code and latent codes from ASIT for the subsequent FER task. We extensively evaluate ASIT on facial datasets such as FFHQ and CelebA-HQ, showing that our approach achieves state-of-the-art facial inversion performance. IFER also achieves competitive results on facial expression recognition datasets such as RAF-DB, SFEW and AffectNet. The code and models are available at https://github.com/Talented-Q/IFER-master.
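The cosine-similarity constraint mentioned above can be sketched as a simple feature-alignment loss; the feature source and weighting are assumptions, not the exact ASIT discriminator.

```python
# Minimal sketch of a cosine-similarity alignment loss between semantic
# features of source and generated (inverted) images. Illustrative only.
import torch
import torch.nn.functional as F

def inversion_similarity_loss(feat_src, feat_gen):
    """Encourage generated-image features to align with source features."""
    cos = F.cosine_similarity(feat_src, feat_gen, dim=-1)   # (B,)
    return (1.0 - cos).mean()   # 0 when features are perfectly aligned

loss = inversion_similarity_loss(torch.randn(4, 512), torch.randn(4, 512))
```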
Compared with the vanilla transformer, the window-based transformer offers a better trade-off between accuracy and efficiency. Although the window-based transformer has made great progress, its long-range modeling capability is limited by the size of the local window and the window connection scheme. To address this problem, we propose a novel Token Transformer (TT). The core mechanism of TT is the addition of a Class (CLS) token that summarizes the window information in each local window; we refer to this type of token interaction as CLS attention. These CLS tokens interact spatially with the tokens in each window to enable long-range modeling. To preserve the hierarchical design of the window-based transformer, we design a Feature Inheritance Module (FIM) in each phase of TT to deliver the local window information from the previous phase to the CLS token of the next phase. In addition, we design a Spatial-Channel Feedforward Network (SCFFN) in TT, which can mix CLS tokens and embedded tokens in both the spatial and channel domains without additional parameters. Extensive experiments show that TT achieves competitive results with few parameters in image classification and downstream tasks.
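A minimal sketch of the per-window CLS interaction might look as follows; the window size, embedding dimension, and the use of a shared learnable CLS initialization are assumptions, not TT's exact configuration.

```python
# Minimal sketch: each local window gets a CLS token that attends to the
# window's patch tokens; the resulting CLS tokens can then interact globally.
import torch
import torch.nn as nn

class WindowCLSAttention(nn.Module):
    def __init__(self, dim=96, heads=3):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))   # shared CLS init
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, windows):              # (num_windows, tokens, dim)
        n = windows.size(0)
        cls = self.cls.expand(n, -1, -1)     # one CLS token per window
        x = torch.cat([cls, windows], dim=1)
        x, _ = self.attn(x, x, x)            # CLS summarizes its window
        return x[:, :1], x[:, 1:]            # (CLS tokens, window tokens)

win = torch.randn(16, 49, 96)                # 16 windows of 7x7 tokens
cls_tokens, tokens = WindowCLSAttention()(win)
```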
Designing and generating new data under targeted properties has been attracting various critical applications, such as molecule design, image editing and speech synthesis. Traditional hand-crafted approaches rely heavily on expert experience and intensive human effort, yet still suffer from insufficient scientific knowledge and low throughput, which limits effective and efficient data generation. Recently, advances in deep learning have given rise to expressive methods that can learn the underlying representations and properties of data. This capability provides new opportunities for figuring out the mutual relationship between the structural patterns and functional properties of data, and for leveraging such a relationship to generate structural data with desired properties. This article systematically reviews this promising research area, commonly known as controllable deep data generation. First, potential challenges are raised and preliminaries are provided. Then controllable deep data generation is formally defined, a taxonomy of various techniques is proposed, and the evaluation metrics in this specific domain are summarized. After that, exciting applications of controllable deep data generation are introduced, and existing works are experimentally analyzed and compared. Finally, promising future directions of controllable deep data generation are highlighted, and five potential challenges are identified.
Diffusion (score-based) generative models have been widely used to model various types of complex data, including images, audio and point clouds. Recently, the deep connection between forward-backward stochastic differential equations (SDEs) and diffusion-based models has been revealed, and several new variants of SDEs have been proposed (e.g., sub-VP, critically-damped Langevin). Despite the empirical success of hand-crafted fixed forward SDEs, a large number of appropriate forward SDEs remain unexplored. In this work, we propose a general framework for parameterizing diffusion models, especially the spatial part of the forward SDE. An abstract formalism with theoretical guarantees is introduced, and its connection with previous diffusion models is leveraged. We demonstrate the theoretical advantage of our method from an optimization perspective. Numerical experiments on synthetic datasets, MNIST and CIFAR10 are also presented to validate the effectiveness of our framework.
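To ground the idea of parameterizing the spatial part of the forward SDE, here is a minimal Euler-Maruyama sketch in which the drift f(x, t) is a small learnable network (the hand-crafted VP-SDE drift -0.5 * beta(t) * x is one fixed member of this family); the network shape and beta schedule are assumptions.

```python
# Minimal sketch: simulate a forward SDE dx = f(x, t) dt + g(t) dw with
# Euler-Maruyama, where the spatial drift f is learnable rather than fixed.
import torch
import torch.nn as nn

class LearnableDrift(nn.Module):
    """Parameterized spatial part f(x, t) of the forward SDE."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(),
                                 nn.Linear(64, dim))

    def forward(self, x, t):
        t_in = t.expand(x.size(0), 1)            # broadcast time to the batch
        return self.net(torch.cat([x, t_in], dim=-1))

def forward_sde(x0, drift, beta=lambda t: 0.1 + 19.9 * t, steps=1000):
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        t = torch.tensor([[k * dt]])
        g = beta(t).sqrt()                       # diffusion coefficient g(t)
        x = x + drift(x, t) * dt + g * torch.randn_like(x) * dt ** 0.5
    return x

x_T = forward_sde(torch.randn(128, 2), LearnableDrift())   # noised samples
```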